60 research outputs found

    A game-based corpus for analysing the interplay between game context and player experience

    Recognizing players’ affective state during gameplay has been the focus of many recent studies. In this paper we describe the process followed to build a corpus based on game events and recorded video sessions from human players playing Super Mario Bros. We present the different types of information extracted from the game context, from player preferences and perception of the game, and from user features automatically extracted from the video recordings. We also run a number of initial experiments analysing players’ behaviour during play, as a case study of possible uses of the corpus.
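    As a rough illustration of what one record of such a corpus might look like once game events and video-derived features are aligned on a shared timeline, consider the sketch below. All field names and the nearest-sample alignment are assumptions for illustration, not the corpus's actual schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CorpusRecord:
    """One synchronized corpus entry; field names are illustrative only."""
    session_id: str
    timestamp_ms: int
    game_events: List[str]                 # e.g. "jump", "coin", "death"
    head_pose: Tuple[float, float, float]  # yaw, pitch, roll from the video
    reported_fun: int                      # player-reported preference rating

def align(events, poses, session_id="s01"):
    """Pair each logged game event with the nearest-in-time head-pose sample."""
    records = []
    for t, names in events:                # events: [(timestamp_ms, [event names])]
        _, pose = min(poses, key=lambda p: abs(p[0] - t))
        records.append(CorpusRecord(session_id, t, names, pose, reported_fun=0))
    return records

records = align([(1200, ["jump"]), (3400, ["coin"])],
                [(1000, (0.1, -0.2, 0.0)), (3500, (0.3, 0.0, 0.1))])
```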

    Psychophysiology in games

    Psychophysiology is the study of the relationship between psychology and its physiological manifestations. That relationship is of particular importance for both game design and, ultimately, gameplay. Players’ psychophysiology offers a gateway towards a better understanding of playing behaviour and experience. That knowledge can, in turn, benefit the player, as it allows designers to make better games, either explicitly by altering the game during play or implicitly during the design process. This chapter argues for the importance of physiology in the investigation of player affect in games, reviews the current state of the art in sensor technology, and outlines the key phases for the application of psychophysiology in games. The work is supported, in part, by the EU-funded FP7 ICT iLearnRW project (project no. 318803).
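    Physiological streams of this kind are commonly reduced to windowed statistics before being related to player experience. The sketch below is a minimal, generic example of that step, assuming a single skin-conductance trace and non-overlapping windows; it is not tied to any particular sensor or to the chapter's own pipeline.

```python
import numpy as np

def window_features(signal, fs, win_s=5.0):
    """Simple per-window statistics over a physiological trace.

    signal: 1-D array, e.g. skin conductance in microsiemens
    fs:     sampling rate in Hz
    Returns one (mean, std, slope) tuple per non-overlapping window.
    """
    n = int(win_s * fs)
    feats = []
    for start in range(0, len(signal) - n + 1, n):
        w = signal[start:start + n]
        t = np.arange(n) / fs
        slope = np.polyfit(t, w, 1)[0]   # linear trend within the window
        feats.append((w.mean(), w.std(), slope))
    return feats

# Example: 60 s of synthetic data sampled at 32 Hz
trace = np.random.default_rng(0).normal(5.0, 0.2, 60 * 32)
print(window_features(trace, fs=32)[:2])
```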

    Robust validation of Visual Focus of Attention using adaptive fusion of head and eye gaze patterns


    A neuro-fuzzy approach to user attention recognition

    Recognizing user attention in front of a monitor, or during a specific task, is a crucial issue in many applications, ranging from e-learning to driving. Visual input is very important for extracting information about a user’s attention when recorded with a camera. However, intrusive equipment (special helmets, glasses fitted with cameras that record eye movements, etc.) constrains users’ spontaneity, especially when the target group consists of underage users. In this paper, we propose a system for inferring user attention (state) in front of a computer monitor using only a simple camera. The system can be used in real-time applications and requires no calibration of camera parameters. It functions under normal lighting conditions and needs no per-user adaptation.

    Keywords: head pose, eye gaze, facial feature detection, user attention estimation
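    To make the fuzzy flavour of such inference concrete, here is a minimal sketch combining head pose and gaze with triangular membership functions and a min-style AND rule. The membership shapes, angle thresholds, and the single rule are invented for illustration; the paper's actual neuro-fuzzy model is more elaborate.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def attention_degree(head_yaw_deg, gaze_offset_deg):
    """Fuzzy degree of 'attentive to the monitor' in [0, 1].

    Thresholds and the min-rule are illustrative assumptions,
    not the paper's trained neuro-fuzzy system.
    """
    head_frontal = tri(head_yaw_deg, -30, 0, 30)
    gaze_on_screen = tri(gaze_offset_deg, -15, 0, 15)
    # Rule: attentive IF head is frontal AND gaze is on the screen
    return min(head_frontal, gaze_on_screen)

print(attention_degree(5.0, -3.0))   # near-frontal head, on-screen gaze -> high
print(attention_degree(40.0, 2.0))   # head turned away -> 0.0
```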

    Contrastive Learning with Cross-Modal Knowledge Mining for Multimodal Human Activity Recognition

    Human Activity Recognition is a field of research where input data can take many forms. Each possible input modality describes human behaviour in a different way, and each has its own strengths and weaknesses. We explore the hypothesis that leveraging multiple modalities can lead to better recognition. Since manual annotation of input data is expensive and time-consuming, emphasis is placed on self-supervised methods that can learn useful feature representations without any ground-truth labels. We extend a number of recent contrastive self-supervised approaches to the task of Human Activity Recognition, leveraging inertial and skeleton data. Furthermore, we propose a flexible, general-purpose framework for multimodal self-supervised learning, named Contrastive Multiview Coding with Cross-Modal Knowledge Mining (CMC-CMKM). This framework exploits modality-specific knowledge to mitigate the limitations of typical self-supervised frameworks. Extensive experiments on two widely used datasets demonstrate that the suggested framework significantly outperforms contrastive unimodal and multimodal baselines in different scenarios, including fully supervised fine-tuning, activity retrieval and semi-supervised learning. Furthermore, its performance is competitive even with supervised methods.
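    For orientation, a symmetric InfoNCE objective between two modality embeddings, in the spirit of Contrastive Multiview Coding, looks like the sketch below. This is the generic cross-modal contrastive baseline, not the CMC-CMKM framework itself, and the embedding dimensions and temperature are arbitrary choices.

```python
import torch
import torch.nn.functional as F

def multimodal_info_nce(z_inertial, z_skeleton, temperature=0.1):
    """Symmetric InfoNCE between two modality embeddings (CMC-style).

    z_inertial, z_skeleton: (batch, dim) embeddings of the SAME clips.
    A generic contrastive objective, not the CMC-CMKM model.
    """
    a = F.normalize(z_inertial, dim=1)
    b = F.normalize(z_skeleton, dim=1)
    logits = a @ b.t() / temperature           # pairwise cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Positive pairs sit on the diagonal; every other clip is a negative.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = multimodal_info_nce(z1, z2)
```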

    User modeling via gesture and head pose expressivity features

    Current work focuses on user modeling in terms of affective analysis that could, in turn, be used in intelligent personalized interfaces and systems, dynamic profiling, and context-aware multimedia applications. The analysis performed within this work comprises statistical processing and classification of automatically extracted gestural and head-pose expressivity features. Qualitative expressive cues of body and head motion are formulated computationally; the resulting features are processed statistically, their correlations are studied, and finally an emotion recognition attempt based on these features is presented. Significant emotion-specific patterns and interrelations among the expressivity features are derived, while the emotion recognition results indicate that gestural and head-pose expressivity features could supplement and enhance a multimodal affective analysis system, contributing an additional modality to be fused with commonly used modalities such as facial expressions, prosodic and lexical acoustic features, and physiological measurements.
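    Expressivity cues of this kind are typically simple kinematic statistics over tracked trajectories. The sketch below computes one common formulation of overall activation, spatial extent and fluidity from a 2-D hand track; these formulas are a plausible stand-in, not necessarily the paper's exact definitions.

```python
import numpy as np

def expressivity_features(hand_xy, fps=25):
    """Kinematic expressivity cues from a (frames, 2) hand trajectory in pixels.

    The three cues are one common formulation; the paper's exact
    definitions may differ.
    """
    vel = np.diff(hand_xy, axis=0) * fps                     # px/s between frames
    speed = np.linalg.norm(vel, axis=1)
    activation = speed.mean()                                # overall motion energy
    extent = np.ptp(hand_xy, axis=0).prod()                  # bounding-box area
    fluidity = 1.0 / (1.0 + np.abs(np.diff(speed)).mean())   # smoother -> higher
    return {"activation": activation, "extent": extent, "fluidity": fluidity}

traj = np.cumsum(np.random.default_rng(1).normal(0, 2, (100, 2)), axis=0)
print(expressivity_features(traj))
```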

    Visual Focus of Attention in Non-calibrated Environments using Gaze Estimation

    Estimating a person’s focus of attention depends strongly on her/his gaze direction. Here, we propose a new method for estimating visual focus of attention using head rotation, as well as fuzzy fusion of head rotation and eye gaze estimates, in a fully automatic manner, without the need for any special hardware or a priori knowledge regarding the user, the environment or the setup. Instead, the system is aimed at functioning under unconstrained conditions, using only simple hardware such as an ordinary web camera. It is designed for a human-computer interaction setting in which a person faces a monitor with a camera mounted on top. To this end, we propose two novel techniques for estimating head rotation, based on local and appearance information, and adaptively fuse them in a common framework. The system recognizes head rotational movement under translational movements of the user in any direction, without any knowledge or a priori estimate of the user’s distance from the camera or of the camera’s intrinsic parameters.
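    The simplest form of adaptive fusion between the two cues is a per-frame confidence-weighted average, sketched below. The plain weighted average and the confidence inputs are illustrative assumptions standing in for the paper's fuzzy fusion scheme.

```python
def fuse_focus(head_est, gaze_est, head_conf, gaze_conf):
    """Confidence-weighted fusion of two focus-of-attention estimates.

    head_est, gaze_est: angle estimates in degrees, same reference frame.
    head_conf, gaze_conf: per-frame reliabilities in [0, 1], e.g. the
    eye-gaze weight lowered when the face is small or poorly lit.
    A weighted average standing in for the paper's adaptive fuzzy fusion.
    """
    total = head_conf + gaze_conf
    if total == 0:
        return None                      # neither cue is usable this frame
    return (head_conf * head_est + gaze_conf * gaze_est) / total

print(fuse_focus(10.0, 4.0, head_conf=0.9, gaze_conf=0.3))  # leans on head pose
```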